Cyber attack


AI firm claims Chinese spies used its tech to automate cyber attacks

BBC News

The makers of artificial intelligence (AI) chatbot Claude claim to have caught hackers sponsored by the Chinese government using the tool to perform automated cyber attacks against around 30 global organisations. Anthropic said hackers tricked the chatbot into carrying out automated tasks under the guise of carrying out cyber security research. The company claimed in a blog post this was the first reported AI-orchestrated cyber espionage campaign. But sceptics are questioning the accuracy of that claim - and the motive behind it. Anthropic said it discovered the hacking attempts in mid-September.


The true extent of cyber attacks on UK business - and the weak spots that allow them to happen

BBC News

The first day of September should have marked the beginning of one of the busiest periods of the year for Jaguar Land Rover. It was a Monday, and the release of new 75 series number plates was expected to produce a surge in demand from eager car buyers. At factories in Solihull and Halewood, as well as at its engine plant in Wolverhampton, staff were expecting to be working flat out. Instead, when the early shift arrived, they were sent home. The production lines have remained idle ever since.


JLR suppliers 'face bankruptcy' due to hack crisis

BBC News

The past two weeks have been dreadful for Jaguar Land Rover (JLR), and the crisis at the car maker shows no sign of coming to an end. A cyber attack, which first came to light on 1 September, forced the manufacturer to shut down its computer systems and close production lines worldwide. Its factories in Solihull, Halewood, and Wolverhampton are expected to remain idle until at least Wednesday, as the company continues to assess the damage. JLR is thought to have lost at least £50m so far as a result of the stoppage. But experts say the most serious damage is being done to its network of suppliers, many of which are small and medium-sized businesses.


Machine Learning for Cyber-Attack Identification from Traffic Flows

Zhou, Yujing, Jacquet, Marc L., Dawit, Robel, Fabre, Skyler, Sarawat, Dev, Khan, Faheem, Newell, Madison, Liu, Yongxin, Liu, Dahai, Chen, Hongyun, Wang, Jian, Wang, Huihui

arXiv.org Artificial Intelligence

This paper presents our simulation of cyber attacks and detection strategies on the traffic control system in Daytona Beach, FL, using Raspberry Pi virtual machines and the OPNsense firewall, along with traffic dynamics from SUMO and exploitation via the Metasploit framework. We try to answer the research question: can we identify cyber attacks by analyzing traffic flow patterns alone? In this research, the cyber attacks focus on scenarios where lights at busy intersections are randomly forced all green or all red by adversarial attackers. Despite challenges stemming from imbalanced data and overlapping traffic patterns, our best model achieves 85% accuracy when detecting intrusions purely from traffic flow statistics. Key indicators for successful detection included occupancy, jam length, and halting durations.
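The detection idea in the abstract — classifying intersections as normal or under attack from flow statistics alone — can be sketched as a simple supervised classifier. The feature names (occupancy, jam length, halting duration) come from the abstract; the synthetic data, distributions, and choice of a random forest are illustrative assumptions, not the paper's pipeline.

```python
# Hypothetical sketch: classify traffic-flow windows as normal (0) or
# attacked (1) using only flow statistics. All numbers are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Normal traffic: moderate occupancy, short queues and halts.
normal = np.column_stack([
    rng.normal(0.3, 0.1, n),   # lane occupancy (fraction)
    rng.normal(20, 8, n),      # jam length (metres)
    rng.normal(10, 5, n),      # halting duration (seconds)
])
# Attacked (e.g. lights forced all red): occupancy and queues spike.
attacked = np.column_stack([
    rng.normal(0.7, 0.1, n),
    rng.normal(80, 15, n),
    rng.normal(60, 10, n),
])
X = np.vstack([normal, attacked])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

On cleanly separated synthetic data the classifier performs near perfectly; the paper's 85% on real, imbalanced SUMO traces reflects how much the two regimes actually overlap.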


Cybersecurity Assessment of Smart Grid Exposure Using a Machine Learning Based Approach

Jeje, Mofe O.

arXiv.org Artificial Intelligence

Disturbances to the stable and normal operation of power systems have grown phenomenally, particularly unauthorized access to confidential and critical data, injection of malicious software, and exploitation of security vulnerabilities in poorly patched software. Developing, as a countermeasure, assessment solutions with machine learning capabilities that keep pace in real time with the growth and speed of these cyber-attacks is therefore not only critical to the security, reliability, and safe operation of power systems, but also germane to guaranteeing advanced monitoring and efficient threat detection. Using the Mississippi State University and Oak Ridge National Laboratory dataset, the study applied an XGB Classifier modeling approach to diagnose and assess power system disturbances as Attack Events, Natural Events, and No-Events. As the test results show, the model generally demonstrates good performance on all metrics across the three sub-datasets, accurately identifying and classifying all three power system event types.


Learning-based Detection of GPS Spoofing Attack for Quadrotors

Wang, Pengyu, Yang, Zhaohua, Li, Jialu, Shi, Ling

arXiv.org Artificial Intelligence

Safety-critical cyber-physical systems (CPS), such as quadrotor UAVs, are particularly prone to cyber attacks, which can result in significant consequences if not detected promptly and accurately. During outdoor operations, the nonlinear dynamics of UAV systems, combined with non-Gaussian noise, pose challenges to the effectiveness of conventional statistical and machine learning methods. To overcome these limitations, we present QUADFormer, an advanced attack detection framework for quadrotor UAVs leveraging a transformer-based architecture. This framework features a residue generator that produces sequences sensitive to anomalies, which are then analyzed by the transformer to capture statistical patterns for detection and classification. Furthermore, an alert mechanism ensures UAVs can operate safely even when under attack. Extensive simulations and experimental evaluations highlight that QUADFormer outperforms existing state-of-the-art techniques in detection accuracy.
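The core idea of a residue generator — compare each GPS measurement against a model-based prediction and hand the residue sequence to a detector — can be illustrated in a few lines. Everything below is an invented toy: the one-step predictor, the window statistic standing in for QUADFormer's transformer stage, the spoofing offset, and the threshold are all assumptions for illustration, not the paper's implementation.

```python
# Toy residue-based spoofing detector (1-D position for simplicity).
import numpy as np

rng = np.random.default_rng(2)
T = 200
true_pos = np.cumsum(rng.normal(0, 0.05, T))   # slowly drifting true position
meas = true_pos + rng.normal(0, 0.1, T)        # noisy GPS readings
meas[120:] += 2.0                              # spoofing offset injected at t = 120

# Residue generator: one-step constant-position prediction.
pred = np.concatenate([[meas[0]], meas[:-1]])
residue = meas - pred

# Detector (stand-in for the transformer): flag any window whose peak
# absolute residue exceeds a threshold well above the noise floor.
window = 10
flags = [np.abs(residue[i:i + window]).max() > 1.0
         for i in range(0, T - window, window)]
first_alarm = flags.index(True) * window
print("spoofing flagged near t =", first_alarm)
```

A sudden spoofing offset shows up as a single large residue spike; slow "drag-off" attacks are harder, which is one motivation for learning statistical patterns over residue sequences rather than thresholding them.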


Exploring reinforcement learning for incident response in autonomous military vehicles

Madsen, Henrik, Grov, Gudmund, Mancini, Federico, Baksaas, Magnus, Sommervoll, Åvald Åslaugson

arXiv.org Artificial Intelligence

Unmanned vehicles able to conduct advanced operations without human intervention are being developed at a fast pace for many purposes. Not surprisingly, they are also expected to significantly change how military operations can be conducted. To leverage the potential of this new technology in a physically and logically contested environment, security risks are to be assessed and managed accordingly. Research on this topic points to autonomous cyber defence as one of the capabilities that may be needed to accelerate the adoption of these vehicles for military purposes. Here, we pursue this line of investigation by exploring reinforcement learning to train an agent that can autonomously respond to cyber attacks on unmanned vehicles in the context of a military operation. We first developed a simple simulation environment to quickly prototype and test some proof-of-concept agents for an initial evaluation. The resulting agent was then applied to a more realistic simulation environment and finally deployed on an actual unmanned ground vehicle for even more realism. A key contribution of our work is demonstrating that reinforcement learning is a viable approach to train an agent that can be used for autonomous cyber defence on a real unmanned ground vehicle, even when trained in a simple simulation environment.
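The "train in a simple simulation first" approach can be sketched with tabular Q-learning on a deliberately tiny incident-response environment. The states, actions, transition probabilities, and rewards below are invented for illustration; the paper's environments and agents are far more elaborate.

```python
# Toy Q-learning agent for incident response on an unmanned vehicle.
# Two states (clean / compromised), two responses. All dynamics invented.
import random

random.seed(0)
STATES = ["clean", "compromised"]
ACTIONS = ["continue_mission", "restore_service"]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    # restore_service cleans the vehicle at a small mission cost;
    # continuing while compromised is heavily penalised.
    if action == "restore_service":
        return "clean", -1.0
    if state == "clean":
        nxt = "compromised" if random.random() < 0.2 else "clean"  # attacks arrive randomly
        return nxt, 1.0
    return "compromised", -5.0

state = "clean"
for _ in range(20000):
    action = (random.choice(ACTIONS) if random.random() < eps
              else max(ACTIONS, key=lambda a: Q[(state, a)]))
    nxt, reward = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

The learned policy continues the mission while clean and restores service once compromised — the qualitative behaviour one would want before graduating the agent to a realistic simulator and then to a real vehicle.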


Windows users are exposed to over 600 million cyber attacks every day

PCWorld

Microsoft recently released the Microsoft Digital Defense Report 2024, this year's edition of the company's annual cybersecurity report. In the 114-page document, Microsoft reveals -- among other things -- just how much cyber threats have grown over the past year. Cybercriminals have gained access to better resources, including the incorporation of AI tools to bolster their arsenal. They're now better equipped to create fake images, videos, and audio recordings to trick people, to flood job applications with AI-created "perfect" résumés in order to gain physical access to companies, and much more. But hackers can also exploit your use of AI services to attack you.


Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks

Usman, Yusuf, Upadhyay, Aadesh, Gyawali, Prashnna, Chataut, Robin

arXiv.org Artificial Intelligence

In an era where digital threats are increasingly sophisticated, the intersection of Artificial Intelligence and cybersecurity presents both promising defenses and potent dangers. This paper delves into the escalating threat posed by the misuse of AI, specifically through the use of Large Language Models (LLMs). This study details various techniques, such as the switch method and character play method, which can be exploited by cybercriminals to generate and automate cyber attacks. Through a series of controlled experiments, the paper demonstrates how these models can be manipulated to bypass ethical and privacy safeguards to effectively generate cyber attacks such as social engineering, malicious code, payload generation, and spyware. By testing these AI-generated attacks on live systems, the study assesses their effectiveness and the vulnerabilities they exploit, offering a practical perspective on the risks AI poses to critical infrastructure. We also introduce Occupy AI, a customized, fine-tuned LLM specifically engineered to automate and execute cyberattacks. This specialized AI-driven tool is adept at crafting steps and generating executable code for a variety of cyber threats, including phishing, malware injection, and system exploitation. The results underscore the urgency for ethical AI practices, robust cybersecurity measures, and regulatory oversight to mitigate AI-related threats. This paper aims to elevate awareness within the cybersecurity community about the evolving digital threat landscape, advocating for proactive defense strategies and responsible AI development to protect against emerging cyber threats.


Bill Gates hails AI as a 'wonderful' technology that can save humans from climate change and disease - but warns it needs to be used 'by people with good intent'

Daily Mail - Science & tech

Tech giant Microsoft is one of the many companies embracing AI. So it's perhaps ironic that Microsoft's co-founder – the multi-billionaire Bill Gates – has given a warning over its potential dangers. Speaking in London this week, Gates called AI a 'wonderful' technology that can save humans from climate change and disease. But he warned that it needs to be used 'by people with good intent', as it could be used by criminals 'engaged in cyber attacks or political interference'. Gates, one of the 10 richest humans in the world, said: 'The defence has to be smarter than the offence.